Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures
Authors
Abstract
Similar resources
Reinforced backpropagation for deep neural network learning
Standard error backpropagation is used in almost all modern deep network training. However, it typically suffers from the proliferation of saddle points in high-dimensional parameter spaces. It is therefore highly desirable to design an efficient algorithm that escapes these saddle points and reaches a parameter region with better generalization capabilities, especially based on rough insights a...
Learning Neural Network Architectures using Backpropagation
Deep neural networks with millions of parameters are at the heart of many state-of-the-art machine learning models today. However, recent works have shown that models with a much smaller number of parameters can perform just as well. In this work, we introduce the problem of architecture learning, i.e., learning the architecture of a neural network along with its weights. We start with a large ne...
Training Deep Spiking Neural Networks Using Backpropagation
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signa...
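The core idea in this line of work, treating the membrane potential as a differentiable signal even though the spike itself is a non-differentiable event, is often realized with a surrogate gradient. The sketch below is illustrative, not the paper's exact method: the hard threshold, the sigmoid surrogate, and the `scale` and `tau` parameters are assumptions.

```python
import numpy as np

def spike_fn(v, threshold=1.0):
    # Forward pass: non-differentiable step function -- emit a spike (1.0)
    # when the membrane potential reaches the threshold, else 0.0.
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, scale=10.0):
    # Backward pass: pretend the step was a steep sigmoid and use its
    # derivative, which is smooth and peaks at the threshold.
    s = 1.0 / (1.0 + np.exp(-scale * (v - threshold)))
    return scale * s * (1.0 - s)

def lif_forward(inputs, tau=0.9, threshold=1.0):
    # Leaky integrate-and-fire neuron driven by an input sequence.
    v, spikes, potentials = 0.0, [], []
    for x in inputs:
        v = tau * v + x                     # leaky integration of input
        s = spike_fn(np.array([v]), threshold)[0]
        v = v * (1.0 - s)                   # hard reset after a spike
        spikes.append(s)
        potentials.append(v)
    return np.array(spikes), np.array(potentials)
```

In a full training loop, `spike_fn` would be used in the forward pass while `surrogate_grad` replaces its derivative during backpropagation, letting gradients flow through the membrane-potential dynamics.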
Deep Neural Network Architectures for Modulation Classification
In this work, we investigate the value of employing deep learning for the task of wireless signal modulation recognition. Recently in [1], a framework was introduced that generates a dataset using GNU Radio, mimicking the imperfections of a real wireless channel and covering 11 different modulation types. Further, a convolutional neural network (CNN) architecture was developed and shown to de...
A successive overrelaxation backpropagation algorithm for neural-network training
A variation of the classical backpropagation algorithm for neural network training is proposed, and convergence is established using the perturbation results of Mangasarian and Solodov. The algorithm is similar to the successive overrelaxation (SOR) algorithm for systems of linear equations and linear complementarity problems in using the most recently computed values of the weights to update the ...
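The SOR idea the abstract refers to, immediately reusing the most recently computed values within a sweep, is easiest to see in its original setting of linear systems. A minimal sketch, with the relaxation factor `omega` and iteration count chosen for illustration:

```python
import numpy as np

def sor_solve(A, b, omega=1.25, iters=100):
    # Successive overrelaxation for A @ x = b: each component update
    # immediately uses the freshest values of the other components
    # (Gauss-Seidel style), then blends old and new with factor omega.
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Sum over all j != i, using already-updated entries x[:i].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x
```

The cited algorithm carries this same "use the freshest weights" scheme over to backpropagation updates; this snippet only illustrates the underlying SOR update rule, not the neural-network variant itself.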
Journal
Journal title: Frontiers in Neuroscience
Year: 2020
ISSN: 1662-453X
DOI: 10.3389/fnins.2020.00119